Anthropic Launches New Model That Spots Zero Days, Makes Wall Street Traders Lose Their Minds:
Anthropic, the makers of the popular and code-competent chatbot Claude, released a new model Thursday called Claude Opus 4.6. The company is doubling down on coding capabilities, claiming that the new model "plans more carefully, sustains agentic tasks for longer, can operate more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes."
It seems the model is also pretty good at catching other people's mistakes. According to a report from Axios, Opus 4.6 was able to spot more than 500 previously undisclosed zero-day security vulnerabilities in open-source libraries during its testing period. It also reportedly did so without receiving specific prompting to go hunting for flaws—it just spotted and reported them.
That's a nice change of pace from all of the many developments that have been happening around OpenClaw, an open-source AI agent that most users have been running with Claude Opus 4.5. A number of vibe-coded projects that have come out of the community have had some pretty major security flaws. Maybe Anthropic's upgrade will be able to catch those issues before they become everyone else's problem.
Claude's calling card has been coding for some time now, but it seems Anthropic is looking to make a splash elsewhere with this update. The company said Opus 4.6 will be better at other work tasks like creating PowerPoint presentations and navigating documents in Excel. Seems those features will be key to Cowork, Anthropic's recent project that it is touting as "Claude Code" for non-technical workers.
It's also boasting that the model will have potential use in financial analysis, and it sure seems like the folks on Wall Street could use some help there. The general consensus among financial analysts this week is that Anthropic's Cowork models are spooking the stock market and are a major factor in sending software stocks into a spiral. It's possible that this is what the market has been responding to—after all, the initial release of DeepSeek, the open-source AI model out of China, tanked the AI sector for a day or so, so it's not like these markets aren't overly sensitive.
But it seems unlikely that Opus 4.6 will fundamentally upend the market. Anthropic already holds a solid lead, commanding a plurality of the enterprise market according to a recent report from Menlo Ventures, and is well ahead of its top (publicly traded) competitors in the space—though OpenAI made its own play to cut into some market share earlier today with the launch of its Frontier platform for managing AI agents. If anything, Anthropic's new model seems like it'll help the company maintain its top spot for the time being. But if the stock market shock is any indication, one thing is for sure: the entire economy is completely pot-committed to the developments in AI. Surely that won't have any repercussions.
Astronomers discover the surprising reason for a star's disappearance
The steady beam of a star twice the size of the sun played a trick on astronomers about a year ago: It vanished.
Then some nine months later, it reappeared in the constellation Monoceros, about 3,200 light-years away in space.
Now researchers think they've solved the mystery of one of the longest star-dimming events ever recorded. The star, called ASASSN-24fw, may have disappeared behind a giant planet with an enormous system of rings, according to new research, blocking most of its light from reaching Earth for nine months.
[...] The team's top explanation involves a brown dwarf surrounded by humongous rings, similar in shape to Saturn's but vastly larger, eclipsing the star. In this case, the rings are estimated to stretch about 15.8 million miles from the brown dwarf, about half the distance between the sun and Mercury.
As the ring system moved in front of the star, it blocked about 97 percent of ASASSN-24fw's light. By studying changes in the star's brightness and light patterns — methods astronomers use to infer mass and motion — the team estimates the hidden object weighs more than three times as much as Jupiter.
The data also suggest the star itself has leftover material close by, possibly debris from past or ongoing planetary collisions. That is unusual for a star believed to be more than a billion years old.
Journal Reference: Sarang Shah, Jonathan P Marshall, Carlos del Burgo, et al., The nature of ASASSN-24fw's occultation: modelling the event as dimming by optically thick rings around a substellar companion, Monthly Notices of the Royal Astronomical Society, Volume 546, Issue 3, March 2026, staf2251, https://doi.org/10.1093/mnras/staf2251
Today is SN's birthday - we are 12 years old!
The site published its first discussion on 12 February 2014 but had to be reset a few days later because of software problems that had not been apparent until the community grew. But after 12 years I won't quibble over a few days' difference.
This site would not exist without the many people who wrote code, configured hardware, tested software, and squashed bugs. It would not be fair to try to name them - I would surely miss many who have been instrumental in getting us where we are today. We initially had a Board comprising 'shareholders', but today we have a Board of volunteers. The running costs, which were once around $6,000 pa, are now almost zero thanks to the generosity of those who donate free hardware and the essential internet connection. Many others over the years have given freely of their time in various roles to keep this site running. No ads, no sponsorship, no commercial pressure.
But the most important people are you - the community. There are still many active accounts that have been with us from the beginning, but those that have joined sometime over the 12 years are equally important and just as welcome. We hope that you all find something of interest in at least some of the stories that we publish. Please keep commenting on them. And if you can, please make the occasional submission - submissions are essential for our continued operation.
Thank you - this is your site. So I raise my glass to SoylentNews, to this community and, hopefully, to the next 12 years!
Claude Opus 4.6 spends $20K trying to write a C compiler:
An Anthropic researcher's efforts to get the company's newly released Opus 4.6 model to build a C compiler left him "excited," "concerned," and "uneasy."
It also left many observers on GitHub skeptical, to say the least.
Nicholas Carlini, a researcher on Anthropic's Safeguards team, detailed the experiment with what he called "agent teams" in a blog that coincided with the official release of Opus 4.6.
He said he "tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. After nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V."
With agent teams, he said, "multiple Claude instances work in parallel on a shared codebase without active human intervention."
One key task was getting round the need for "an operator to be online and available to work jointly," which we presume means removing the need for Claude Code to wait for a human to tell it what to do next.
"To elicit sustained, autonomous progress, I built a harness that sticks Claude in a simple loop... When it finishes one task, it immediately picks up the next." Imagine if humans took that sort of approach.
Carlini continued: "I leave it up to each Claude agent to decide how to act. In most cases, Claude picks up the 'next most obvious' problem." This threw up a number of lessons, including the need to "write extremely high quality tests."
Readers were also advised to "put yourself in Claude's shoes." That means the "test harness should not print thousands of useless bytes" to make it easier for Claude to find what it needs.
Also, "Claude can't tell time and, left alone, will happily spend hours running tests instead of making progress."
Which might make you feel working with Claude is closer to working with a regular human than you might have thought. But what was the upshot of all of this?
"Over nearly 2,000 Claude Code sessions across two weeks, Opus 4.6 consumed 2 billion input tokens and generated 140 million output tokens, a total cost just under $20,000."
This made it "an extremely expensive project" compared to the priciest Claude Max plans, Carlini said. "But that total is a fraction of what it would cost me to produce this myself – let alone an entire team."
Other lessons? "The compiler successfully builds many projects, but not all. It's not yet a drop-in replacement for a real compiler." Moreover, "the generated code is not very efficient."
He added that the Rust code quality is "reasonable but... nowhere near the quality of what an expert Rust programmer might produce."
Carlini concluded: "Agent teams show the possibility of implementing entire, complex projects autonomously."
But as a former pen-tester, he said fully autonomous development posed real risks. "The thought of programmers deploying software they've never personally verified is a real concern." Ultimately, the experiment "excites me, [but] also leaves me feeling uneasy."
Commenters on GitHub were less equivocal, not least because many felt the $20K price tag ignored a few other elements, such as the vast amount of other programmers' code the model was trained on in the first place.
As mohswell put it: "If I went to the supermarket, stole a bit of every bread they had, and shoved it together, no one would say I made bread from scratch. They'd say I'm a thief. If this is 'from scratch,' then my cooking is farm-to-table."
While Sambit003 opined: "The comment section and the issue itself is 'absolute cinema' moment everyone living through😂... the longer the AI generated codes I see... the safer I feel. 😂 Still we have the jobs (for long enough years)... just enjoy the overhyping bruh."
Serkosal added plaintively: "okay, nice, could @claude find gf for me? No? I'm not interested."
https://www.zdnet.com/article/personal-digital-sovereignty-choices-free-linux-servers/
You may have noticed that many European Union (EU) governments and agencies, worried about ceding control to untrustworthy US companies, have been embracing digital sovereignty. Those bodies are turning to running their own cloud and services instead of relying on, say, Microsoft 365 or Google Workspace. If you prize your privacy and want to control your own services, you can take that approach as well.
Of course, if you're a techie's techie, you could always run your own cloud. I've been running my own servers for decades. These days, I use AlmaLinux, Rocky Linux, and Ubuntu on my machines.
However, most people don't have many years of Unix/Linux system administration behind them. Fortunately, there are pre-built Linux servers suitable for home and small-business users. With these servers, you still need to be a power user to get the most out of them, but they don't require you to be a Linux expert.
There are three types of ready-to-run Linux server distributions. The first type provides software-as-a-service (SaaS) add-ons and programs. Then there are the distros that focus on providing file server/storage services. Finally, believe it or not, there's one approach meant to replace Windows Server.
1. The privacy-first approach: FreedomBox
FreedomBox, the project initiated by Free Software Foundation (FSF) legal expert Eben Moglen, has matured into Debian's official self-hosting solution.
As Moglen said when he introduced FreedomBox in 2011, "We're building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate. They can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls."
The platform is now integrated as an official Debian Pure Blend. This approach enables you to transform a fresh Debian installation into a privacy-focused server via the Plinth web interface.
2. YunoHost: Self-hosting democratized
YunoHost is best described as a "make self‑hosting boring" layer on top of Debian. As its volunteer creators say, "YunoHost is primarily designed for people who want things to 'just work.'"
Similar to FreedomBox, YunoHost functions as both a standalone operating system and a package you can install on an existing Debian installation. Unlike FreedomBox, which can be scaled up for a small business, the YunoHost crew warns, "YunoHost is not designed to 'scale' in the traditional sense. It is intended for a relatively modest number of user accounts and simultaneous users." So, a few dozen users? No problem. A few hundred? No, just no.
YunoHost comes with a small, integrated server stack. Everything else is added from its catalog. On a fresh YunoHost install, you get these main components by default: a web admin interface and a user portal for installing and logging in to all the applications. This setup is supported by Nginx as the web server and reverse proxy, with SSOwat for single sign-on to all installed web apps.
You can also install an email server stack from the start. Your default programs are Postfix as the Simple Mail Transfer Protocol (SMTP) server, Dovecot as the Internet Message Access Protocol (IMAP) server, and Rspamd, with DomainKeys Identified Mail (DKIM) handling, for spam filtering and mail authentication. As email server programs go, these are the easiest to manage, and YunoHost does a great job of installing them.
However, speaking as someone who's been running email servers for decades, setting them up and managing them on the internet is hard work. You'll need to set up a proper domain, DNS records (MX, SPF, DKIM, DMARC), and a static IP address. If your eyes just glazed over, don't try running your own email server.
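If you do want to see what those records look like for an existing domain, here is a small illustrative Python sketch that reads them back using the third-party dnspython package (an assumption - install it with pip); it only queries records, and the actual setup still happens at your DNS provider:

    # Illustrative sketch only: read back the DNS records a mail domain needs.
    # Requires the third-party dnspython package (pip install dnspython).
    import dns.resolver

    def show_mail_records(domain: str) -> None:
        try:
            for mx in dns.resolver.resolve(domain, "MX"):
                print(f"MX    : {mx.preference} {mx.exchange}")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print("MX    : none found")

        try:
            for txt in dns.resolver.resolve(domain, "TXT"):
                text = b"".join(txt.strings).decode()
                if text.startswith("v=spf1"):
                    print(f"SPF   : {text}")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            pass

        try:
            for txt in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
                print(f"DMARC : {b''.join(txt.strings).decode()}")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            print("DMARC : none found")
        # DKIM lives at <selector>._domainkey.<domain>, so checking it also
        # requires knowing which selector your mail stack signs with.

    if __name__ == "__main__":
        show_mail_records("example.org")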
Like FreedomBox, YunoHost is completely free.
3. TrueNAS: The network storage server
iXsystems' TrueNAS Community Edition is the free, open‑source edition of the TrueNAS storage OS for x86 hardware. This technology turns a PC or server into a dedicated NAS appliance built around OpenZFS. It's effectively the "DIY" version of the same codebase TrueNAS uses in its paid appliances, just without commercial support and with some enterprise features held back.
Unlike the previous picks, the community edition isn't a general-purpose server. It's best used when you want a storage‑first home or small‑business box. I use my edition for video storage for my Jellyfin media server. With a couple of terabytes of 1930s through 1950s movies, I need all the help I can get. This system is also very useful for virtual machine images and massive database storage.
The community edition is also very useful for small-office NAS jobs, such as sharing files over SMB/NFS to Windows and Linux PCs. The system also works great for backups and archival storage.
TrueNAS is also available for free. If you want to use it in a business, though, you can buy TrueNAS Enterprise on an iXsystems rack server. This comes with high-availability (HA) features and commercial support. Its pricing is quote‑based and not listed as a flat fee. TrueNAS reseller prices for a low-end TrueNAS X10 2U Unified Storage Appliance with 20TB of raw capacity begin at $15,000.
4. Rockstor: BTRFS-powered NAS
Rockstor is another NAS Linux. This system differs from TrueNAS by building on the B-tree file system (BTRFS), a modern copy-on-write (CoW) filesystem for Linux designed for high scalability, fault tolerance, and ease of administration.
Rockstor supports advanced features like snapshots, data compression, and built-in RAID. The system is for users who want storage flexibility without enterprise complexity.
Now built on openSUSE, Rockstor supports both x86_64 and ARM64 architectures, including the Raspberry Pi 4 and RPi 400.
5. Zentyal: Windows server alternative
If you're running a small Windows-based business or you've worked as a Windows network administrator, you might want to give Zentyal a try. Zentyal 8.0 is based on Ubuntu Server 22.04 LTS. This SMB server targets organizations seeking to replace Microsoft Windows Server without disrupting existing workflows.
Zentyal comes with native Active Directory (AD) compatibility, which enables:
- Seamless Windows client domain joining.
- Group Policy Object management through RSAT.
- No Client Access License requirements.
- Integration with existing Windows domains as an additional domain controller.
Beyond directory services, Zentyal includes:
- SMTP and POP3/IMAP mail servers with ActiveSync and webmail.
- Gateway services, with firewall, IDS/IPS (Suricata), and HTTP proxy.
- VPN capabilities via OpenVPN and IPSec/L2TP.
- DNS, DHCP, NTP, and CA services.
Zentyal is available as a free "Development Edition," the community edition that you can download as an ISO or install on top of Ubuntu Server/Desktop using their installer script. However, you're on your own for support. If you're not already a Microsoft Certified: Windows Server Hybrid Administrator Associate, this operating system isn't for you.
If you want to use Zentyal in business, pricing starts at $230 per server per year, with support for up to 25 users.
[...] Taken together, these projects show Linux reclaiming the low‑end server market it helped create, but on very different terms than in the Linux, Apache, MySQL, Python/Perl/PHP (LAMP) era. Instead of expecting a part‑time admin to assemble services piece by piece, these server distros ship as curated appliances with opinionated defaults, auto‑updates, and catalog‑style app install flows.
The era of depending on third-party cloud services is yielding to practical self-hosting alternatives. Whether prioritizing privacy, collaboration, storage, or network services, the Linux ecosystem now offers mature, well-maintained options for users willing to invest a modest amount of technical effort in exchange for data sovereignty.
In a breakthrough that could reshape how tools for harsh environments are made, scientists at Hiroshima University have developed a method to 3D print one of the toughest materials used in industry: tungsten carbide – cobalt. The advance overcomes a long-standing challenge in additive manufacturing – how to shape ultra-hard composites without damaging their internal structure.
The university's team reports that their approach centers on controlled "softening" of the material rather than complete melting. The process, known as hot-wire laser irradiation, reshapes tungsten carbide while maintaining its exceptional hardness and minimizing defects – an achievement that could transform how cutting, drilling, and construction tools are manufactured.
Unlike most 3D printing workflows, which rely on fully melting metal powders or rods, the Hiroshima group used a laser to heat tungsten carbide rods just enough to make them pliable. This prevented grain growth and decomposition that often occur at full melting temperatures.
To bond multiple printed layers securely, researchers added a nickel-based alloy as an intermediate layer within the build. The result: dense parts with a measured surface hardness exceeding 1,400 HV, approaching the hardness of gemstones like sapphire.
Assistant Professor Keita Marumoto of Hiroshima University's Graduate School of Advanced Science and Engineering described the technique as an entirely new approach to forming metallic materials. He noted that, while the current work focused on cemented carbides such as WC – Co, the same principle could potentially apply to other difficult-to-manufacture compounds.
Traditional approaches involve sintering powdered materials in molds, which limits geometrical complexity and generates substantial waste. Additive manufacturing could, in theory, solve both problems – if the material could survive the process.
While the achievement represents a leap forward, the research group acknowledges that their work remains ongoing. They are fine-tuning the process to eliminate occasional cracking and plan to test how far the technique can be extended to more intricate geometries.
If successful, additive manufacturing could soon produce complex industrial tools that combine durability with material efficiency – an outcome long out of reach for engineers working with ultra-hard composites.
Ford Motor Company on Feb. 10 reported fourth-quarter 2025 revenue of $45.9 billion, a 5 percent year-over-year decline that led to its largest earnings miss since the same quarter in 2021:
Ford posted a net loss of $11.1 billion in the quarter and earnings per share of $0.13, well below analysts' forecast of $0.18. In the year-ago quarter, Ford posted net income of $1.8 billion and earnings per share of $0.45. The Dearborn, Michigan-based automaker's full-year revenue of $187.3 billion was up from $185 billion in 2024, marking the fifth straight year of revenue growth despite the challenging fourth quarter.
Its net loss for the year, however, was $8.2 billion, versus net income of $5.9 billion in 2024.
Ford CEO Jim Farley said during a conference call with analysts that the impact from a fire at the Novelis aluminum plant in Oswego, New York—a major aluminum supplier for the automaker's F-series pickup trucks—and unexpected changes to tariff credits for auto parts resulted in costs of roughly $4 billion.
[...] Ford also provided full-year guidance for 2026 of adjusted earnings before interest and taxes of $8–10 billion, up from the $6.8 billion reported in 2025, and in line with the FactSet analyst estimate of $8.78 billion.
From Road & Track:
Ford is not alone in its decision to take a step back from its lofty plans for electric vehicles, as the entire auto industry grapples with slowing demand for battery-powered cars and trucks, but a recent financial report from the Dearborn-based automaker spells out just how painful the situation has been for the company's bank accounts.
Four baby planets show how super-Earths and sub-Neptunes form:
Thanks to the discovery of thousands of exoplanets to date, we know that planets bigger than Earth but smaller than Neptune orbit most stars. Oddly, our sun lacks such a planet. That's been a source of frustration for planetary scientists, who can't study them in as much detail as they'd like, leaving one big question: How did these planets form?
Now we know the answer.
An international team of astrophysicists from UCLA and elsewhere has witnessed four baby planets in the V1298 Tau system in the process of becoming super-Earths and sub-Neptunes. The findings are published in the journal Nature.
"I'm reminded of the famous 'Lucy' fossil, one of our hominid ancestors that lived 3 million years ago and was one of the 'missing links' between apes and humans," said UCLA professor of physics and astronomy and second author Erik Petigura. "V1298 Tau is a critical link between the star- and planet-forming nebulae we see all over the sky, and the mature planetary systems that we have now discovered by the thousands."
Planets form when a cloud of gas and dust, called a nebula, contracts under the force of gravity into a young star and a swirling disk of matter called a protoplanetary disk. Planets form from this disk of gas, but it's a messy process. There are many ways a planet can grow or shrink in size during its infancy --- a period of a few hundred million years. This led to major questions about why so many mature planets were between the sizes of Earth and Neptune.
The star V1298 Tau is only about 20 million years old compared to our 4.5-billion-year-old sun. Expressed in human terms, it's equivalent to a 5-month-old baby. Four giant, rapidly evolving planets between the sizes of Neptune and Jupiter orbit the star, but unlike growing babies, the new research shows that these planets are contracting in size and are steadily losing their atmospheres. Petigura and co-author Trevor David at the Flatiron Institute led the team that first discovered the planets in 2019.
"What's so exciting is that we're seeing a preview of what will become a very normal planetary system," said John Livingston, the study's lead author from the Astrobiology Center in Tokyo, Japan. "The four planets we studied will likely contract into 'super-Earths' and 'sub-Neptunes'—the most common types of planets in our galaxy, but we've never had such a clear picture of them in their formative years."
[...] Once they sorted out the shapes and timing of the orbits of the four planets, the researchers could make sense of how the planets tugged on each other due to gravity, sometimes slowing down and sometimes speeding up, causing transits to occur sometimes early and other times late. These transit timing variations allowed the team to measure the masses of all four planets for the first time, which is akin to weighing them.
The shocking result? Despite being 5 to 10 times the radius of Earth, the planets had masses only 5 to 15 times larger than Earth. This means they are very low-density, comparable to Styrofoam, whereas the Earth has the density of rock.
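A rough back-of-the-envelope check, using mid-range values from the figures quoted above rather than the paper's fitted parameters, shows how that Styrofoam comparison falls out of simple M/R³ scaling:

    # Back-of-the-envelope density check using mid-range values from the article
    # (illustrative round numbers, not the paper's fitted parameters).
    EARTH_DENSITY_G_CM3 = 5.51          # Earth's mean density

    def bulk_density(mass_earths: float, radius_earths: float) -> float:
        """Bulk density in g/cm^3, scaling Earth's density by M / R^3."""
        return EARTH_DENSITY_G_CM3 * mass_earths / radius_earths ** 3

    # Roughly 10 Earth masses spread over roughly 8 Earth radii:
    print(f"{bulk_density(10, 8):.2f} g/cm^3")   # ~0.11 g/cm^3, far below water (1.0) or rock (~3-5)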
"The unusually large radii of young planets led to the hypothesis that they have very low densities, but this had never been measured," said Trevor David, a co-author from the Flatiron Institute who led the initial discovery of the system in 2019. "By weighing these planets for the first time, we have provided the first observational proof. They are indeed exceptionally 'puffy,' which gives us a crucial, long-awaited benchmark for theories of planet evolution."
"Our measurements reveal they are incredibly lightweight — some of the least dense planets ever found. It's a critical step that turns a long-standing theory about how planets mature into an observed reality," said Livingston.
[...] "These planets have already undergone a dramatic transformation, rapidly losing much of their original atmospheres and cooled faster than what we'd expect from standard models," said James Owen, a co-author from Imperial College London who led the theoretical modeling. "But they're still evolving. Over the next few billion years, they will continue to lose their atmosphere and shrink significantly, transforming into the compact systems of super-Earths and sub-Neptunes we see throughout the galaxy."
Journal Reference: Livingston, J.H., Petigura, E.A., David, T.J. et al. A young progenitor for the most common planetary systems in the Galaxy. Nature 649, 310–314 (2026). https://doi.org/10.1038/s41586-025-09840-z
Visual Studio Code extension faces March shutdown with no transition guidance:
Microsoft has abruptly announced the deprecation of Polyglot Notebooks with less than two months' notice, throwing the future of the .NET Interactive project into doubt.
The deprecation will come into effect on March 27, whereupon bug fixes and support will cease, and no new features will be added. However, the extension won't be automatically uninstalled from a user's Visual Studio Code installation.
Polyglot Notebooks is an important element of the Microsoft .NET Interactive project, which Microsoft describes as "an engine and API for running and editing code interactively." .NET Interactive can run as a kernel for notebooks and "enables a polyglot (multi-language) notebook experience," according to Microsoft. "For the best experience when working with multi-language notebooks, we recommend installing the Polyglot Notebooks extension for Visual Studio Code."
That recommendation presumably remains in place until Microsoft pulls the plug.
The deprecation announcement was made in the project's GitHub repository and the thread was locked, limiting conversation. However, users were quick to raise additional issues, questioning the reasoning behind the deprecation and the short time frame.
One pointed out the Polyglot Notebooks extension in Visual Studio Code was Microsoft's recommendation for data analysts, since Azure Data Studio is retiring at the end of this month. Microsoft's reaction was to remove the recommendation.
It appears the author of the Azure Data Studio retirement documentation was unaware of the impending doom facing the Polyglot Notebooks extension. An individual claiming to be the author posted: "As a result of the deprecation announcement for Polyglot Notebooks, I am legally bound to remove that recommendation from the Azure Data Studio article, because it would mislead customers to keep it in."
Which is true. However, as another user noted: "Removing that documentation from the Azure Data Studio page – and giving no transition path at all for those users (like myself) who depend on those Azure Data Studio features – seems a pretty user-hostile approach. We've already followed Microsoft's transition guidance once and ended up in this situation. Should we now look elsewhere for this functionality?"
The short notice and mixed messaging speaks more of dysfunctional management and communication within Microsoft than anything else. If only there were some tool at the company's disposal for Teams to communicate and collaborate.
We'll give the final word to another user reacting to the deprecation announcement, who said: "This is just another dark day for Microsoft customers, and the decision makers are nowhere to be seen taking accountability for the impact of their decisions."
In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. Near as the editors could tell, many submitters pasted the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone:
This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can't keep up.
This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly by people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism and hiring, it's the same story.
Like Clarkesworld's initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with some defensive response, often involving a counteracting use of AI.
[...] These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases. There is also harm if the established measures of academic performance – publications and citations – accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.
TFA goes on to discuss the upsides of AI, how AI makes fraud easier, and some ideas on balancing harms with benefits. Originally spotted on Schneier on Security.
Dispute erupts between popular web archive and independent blogger:
Archive.today, also known as Archive.is and Archive.ph, has gained notoriety in recent years as a useful tool for archiving web pages and bypassing paywalls. However, the site's CAPTCHA page currently weaponizes visitor traffic in a DDoS campaign against a blogger who attempted to unmask Archive.today's mysterious operator(s). The behavior has prompted Wikipedia editors to debate whether to ban the archive site, which underpins hundreds of thousands of Wikipedia citations and might be living on borrowed time.
Wikipedia relies heavily on Archive.today because it is more effective than conventional alternatives, such as the Internet Archive. However, the properties that have made Archive.today so useful have also drawn the attention of the FBI, likely because the site circumvents the paywalls of numerous prominent media outlets.
In contrast with the Internet Archive, which is legally sanctioned and complies with takedown requests, Archive.today follows no such rules, and its creator remains anonymous. Its advanced scraping methods and free-wheeling nature have turned it into a repository for sources that are likely available nowhere else. If the site were added to Wikipedia's blacklist, as it was once before from 2013 to 2016, nearly 700,000 citation links would become useless, and many would likely never be repaired.
The discussion arose after Archive.today used its CAPTCHA page to direct DDoS traffic toward blogger Jani Patokallio, who posted an inconclusive investigation into the site's origins in 2023. However, the blog did not draw much attention until 2025, when various outlets cited it while reporting on the FBI's investigation into Archive.today.
The CAPTCHA page currently contains code that drives requests to the search function of Patokallio's blog, meaning that every Wikipedia citation leading to Archive.today could potentially contribute to the DDoS attack. However, Patokallio claims that the attack has caused no real harm. Visiting the page with uBlock Origin installed also seems to neutralize the offending code.
[...] Wikipedia is currently weighing three options to address the issue: retaining the status quo, removing all links, or discouraging future citations while keeping existing links. Some also argue that pivoting away from Archive.today is prudent regardless of the current dispute due to the site's inherently precarious existence. In 2021, Archive.today's creator admitted that it is "doomed to die at any moment."
Another quarter, another gain for AMD:
AMD ended 2025 with fanfare as it increased its market share across all major CPU product segments, according to Mercury Research, achieving a 29.2% share of all x86 processors shipped in the fourth quarter, an all-time record for the company. AMD now holds its highest-ever unit share across the desktop, laptop, and server CPU markets while also capturing the most lucrative parts of those markets, and controls 35.4% of x86 CPU revenue.
In the client PC segment, AMD finished 2025 with one of its strongest quarters ever, partly because Intel struggled to get enough client silicon from its own fabs and from TSMC, but to a large degree because of highly competitive desktop CPUs and a meticulously calculated mobile CPU lineup.
AMD's client CPU unit share rose to 29.2% in Q4 2025, up 3.8 percentage points quarter-over-quarter (QoQ) and 4.6 points year-over-year (YoY), driven by sales of both desktop and mobile offerings.
Intel remained the clear volume leader with about 70.8% of client CPU shipments, but that is a sharp decline both sequentially and compared to the same quarter a year ago. That is not surprising: Intel had to reassign its internal manufacturing capacity to produce server CPUs instead of client silicon and could not get enough silicon from TSMC.
What is perhaps more alarming for Intel is that its client PC CPU revenue share declined to 68.8%, allowing AMD to control 31.2% of the dollar share of PC processor sales, up 2.9 percentage points QoQ and 7.4 points YoY. This reflects AMD's higher average selling prices (ASPs), stronger sales of premium desktop and notebook processors, and continued gains in higher-margin segments.
Intel admits that it is hard to compete against AMD with its current lineup and hopes that things will begin to change in late 2026 – 2027, which means that AMD will likely continue to enjoy eating Intel's lunch in the coming quarters.
On Tuesday night, the Federal Aviation Administration closed airspace up to 18,000 feet above the El Paso International Airport in Texas, saying the restrictions would be in place for 10 days. Then, less than 10 hours later, the federal agency reopened the airspace, allowing planes to land and take off at the busy airport.
About an hour after lifting the restrictions, US Secretary of Transportation Sean Duffy, whose responsibilities include overseeing the FAA, explained the unexpected closure by saying, "The FAA and DOW acted swiftly to address a cartel drone incursion."
[...]
Not everyone agrees with Duffy's account. Based upon reporting from The New York Times and other publications, the military has been developing high-energy lasers to bring down drones.
[...]
The FAA had not resolved all of its concerns about airplane safety from the tests. Despite these apparently lingering concerns, the military went ahead with a test earlier this week against what was thought to be a drone. The object turned out to be a party balloon.
[...]
One of the many lessons from the war in Ukraine, which has rapidly pushed forward drone technology in contested environments, is that it is not practical to shoot down drones with conventional missiles. So it is understandable that the US military is looking at alternatives. This all culminated in some sort of snafu between the FAA and military officials regarding coordination with this week's test.
[...]
action was taken without consulting local or state officials in Texas—who are understandably outraged
[...]
"I want to be very, very clear that this should've never happened," El Paso Mayor Renard Johnson said during a news conference on Wednesday. "That failure to communicate is unacceptable."
Relevant video from a commenter on the original article: 99 Luftballons [3:57 Ed]
https://nand2mario.github.io/posts/2026/80386_barrel_shifter/
I'm currently building an 80386-compatible core in SystemVerilog, driven by the original Intel microcode extracted from real 386 silicon. Real mode is now operational in simulation, with more than 10,000 single-instruction test cases passing successfully, and work on protected-mode features is in progress. In the course of this work, corners of the 386 microcode and silicon have been examined in detail; this series documents the resulting findings.
In the previous post, we looked at multiplication and division -- iterative algorithms that process one bit per cycle. Shifts and rotates are a different story: the 386 has a dedicated barrel shifter that completes an arbitrary multi-bit shift in a single cycle. What's interesting is how the microcode makes one piece of hardware serve all shift and rotate variants -- and how the complex rotate-through-carry instructions are handled.
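To make the idea concrete, here is a toy Python model of the general barrel-shifter trick - feed a wide value through one shifter and choose what sits in the upper half - which is how a single primitive can serve shifts, rotates, and even rotate-through-carry. It illustrates the concept only and is not the actual 386 datapath:

    # Toy model: one wide "barrel shift" primitive serving every shift/rotate variant.
    # This illustrates the general trick only; it is not the actual 386 datapath.
    MASK32 = 0xFFFFFFFF

    def barrel(hi: int, lo: int, n: int) -> int:
        """Shift the 64-bit value (hi:lo) right by n and keep the low 32 bits."""
        return (((hi << 32) | lo) >> n) & MASK32

    def shr(x, n): return barrel(0, x, n)                         # zeros shift in
    def sar(x, n): return barrel(MASK32 if x >> 31 else 0, x, n)  # sign copies shift in
    def ror(x, n): return barrel(x, x, n)                         # the operand wraps around
    def rol(x, n): return ror(x, (32 - n) % 32)                   # left rotate via right rotate

    def rcr(x, n, cf):
        """Rotate-through-carry: CF sits between bit 31 and bit 0 (a 33-bit rotate)."""
        wide = (cf << 32) | x                                      # 33-bit value CF:x
        wide = ((wide >> n) | (wide << (33 - n))) & ((1 << 33) - 1)
        return wide & MASK32, (wide >> 32) & 1                     # (new value, new CF)

    print(hex(ror(0x12345678, 8)))   # 0x78123456
    print(rcr(0x00000001, 1, 0))     # (0, 1): bit 0 rotates into CF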
https://www.theregister.com/2026/02/09/taiwan_us_chip_production/
Taiwan's vice-premier has ruled out relocating 40 percent of the country's semiconductor production to the US, calling the Trump administration's goal "impossible."
In an interview broadcast on the CTS channel, vice premier Cheng Li-chiun said she made clear to US officials that Taiwan's semiconductor ecosystem cannot be moved and its most advanced technologies will remain domestic.
"When it comes to 40 or 50 percent of production capacity being moved to the United States... I have made it very clear to the US side that this is impossible," she said, according to The Straits Times.
Cheng led Taiwan's January trade delegation to Washington, which secured reduced US tariffs on Taiwanese goods - from 20 percent to 15 percent - in exchange for increased investment into America's tech sector.
At the time, US commerce secretary Howard Lutnick told CNBC the deal aimed to relocate 40 percent of Taiwan's entire chip manufacturing and production capacity to America.
A Department of Commerce release cast the agreement as a "massive reshoring of America's semiconductor sector."
Taiwan, which produces more than 60 percent of global semiconductors and roughly 90 percent of the world's most advanced chips, insists it gained this leadership position by investing in the tech when other countries didn't.
Former Intel chief Pat Gelsinger supports this view, publicly stating a couple of years ago that countries like Korea, Taiwan, and China put in place long-term industrial policies and investment in chipmaking, while the US and European nations failed to do the same.
Cheng reiterated this in her interview, saying that "an industrial ecosystem built up over decades cannot be relocated."
Taiwan views its semiconductor dominance as strategic defense against Chinese aggression. Beijing claims Taiwan as its territory and threatens reunification by force if necessary. Even Lutnick acknowledged this "silicon shield" dynamic last year, noting China's open ambitions:
"We need their silicon, the chips so badly that we'll shield them, we'll protect them."
TSMC considered relocating its chip fabs in 2024 due to threats from China but decided against the idea given the difficulties involved.
Any Chinese invasion would devastate the global tech sector, as The Register pointed out recently. Most of Nvidia's GPUs are made in Taiwan, as are AMD's processors and Qualcomm's smartphone chips. The supply of these would be cut off by any invasion, and there is no other source these companies can easily turn to.